Deploying BRX Applications
This guide covers best practices and strategies for deploying BRX applications to production environments. Whether you’re building a simple API integration or a complex AI application, these guidelines will help you deploy your BRX applications reliably and securely.
Deployment Considerations
When deploying BRX applications, consider the following factors:
Environment Management
- Development: For building and testing new features
- Staging: For pre-production testing in a production-like environment
- Production: For serving real users
Each environment should have its own (see the configuration sketch after this list):
- BRX API keys
- Configuration settings
- Resource allocations
- Monitoring setup
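For example, a small configuration module can load a separate settings file per environment. This is a minimal sketch, assuming dotenv-style .env.development, .env.staging, and .env.production files; the file layout and variable names are illustrative, not part of BRX:
// config.js (hypothetical helper)
import dotenv from 'dotenv';

// Pick the .env file that matches the current environment,
// e.g. .env.development, .env.staging, or .env.production
const environment = process.env.NODE_ENV || 'development';
dotenv.config({ path: `.env.${environment}` });

export const config = {
  environment,
  brxApiKey: process.env.BRX_API_KEY,        // a separate key per environment
  logLevel: process.env.LOG_LEVEL || 'info', // per-environment logging
  port: Number(process.env.PORT) || 3000
};
Keeping these files out of version control gives each environment its own API keys, resource settings, and monitoring configuration without touching application code.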
API Key Management
BRX API keys should be managed securely:
- Use different API keys for different environments
- Restrict API key permissions based on the principle of least privilege
- Rotate API keys regularly
- Never hardcode API keys in your application code
- Use environment variables or a secure key management service
Example using environment variables:
import BRX from 'brx-node';
// Load API key from environment variable
const apiKey = process.env.BRX_API_KEY;
if (!apiKey) {
  throw new Error('BRX_API_KEY environment variable is required');
}
// Initialize BRX client
const brx = new BRX(apiKey);
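If you use a secure key management service instead of raw environment variables, the key can be fetched once at startup. The following is a minimal sketch, assuming AWS Secrets Manager and a secret named brx/api-key; both are assumptions for illustration, not part of BRX:
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';
import BRX from 'brx-node';

// Fetch the BRX API key from AWS Secrets Manager at startup.
// The secret name 'brx/api-key' is illustrative.
async function createBrxClient() {
  const client = new SecretsManagerClient({});
  const secret = await client.send(
    new GetSecretValueCommand({ SecretId: 'brx/api-key' })
  );
  return new BRX(secret.SecretString);
}

// Requires ES modules (top-level await)
const brx = await createBrxClient();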
Resource Planning
Plan your resource needs based on:
- Expected request volume
- Complexity of your BRKs
- Response time requirements
- Budget constraints
Consider implementing the following (each is covered in detail under Scaling Strategies below):
- Caching for frequently used BRK results
- Rate limiting to prevent abuse
- Horizontal scaling for handling increased load
Deployment Architectures
Serverless Deployment
Serverless deployment is ideal for event-driven BRX applications with variable load:
AWS Lambda Example
// handler.js
import BRX, { BRK } from 'brx-node';

export const handler = async (event) => {
  try {
    // Initialize BRX client
    const brx = new BRX(process.env.BRX_API_KEY);

    // Parse request
    const { brkId, inputs } = JSON.parse(event.body);

    // Fetch BRK schema
    const brkSchema = await brx.get(brkId);
    const myBrk = new BRK(brkSchema);

    // Set inputs
    for (const [key, value] of Object.entries(inputs)) {
      myBrk.input[key] = value;
    }

    // Execute BRK
    const result = await brx.run(myBrk);

    // Return response
    return {
      statusCode: 200,
      body: JSON.stringify({
        success: true,
        result: result[0].brxRes.output
      })
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({
        success: false,
        error: error.message
      })
    };
  }
};
Serverless Framework Configuration
# serverless.yml
service: brx-application

provider:
  name: aws
  runtime: nodejs14.x
  environment:
    BRX_API_KEY: ${env:BRX_API_KEY}

functions:
  executeBrk:
    handler: handler.handler
    events:
      - http:
          path: execute
          method: post
          cors: true
Container-Based Deployment
Container-based deployment is suitable for more complex BRX applications:
Dockerfile
FROM node:14-alpine
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Set environment variables
ENV NODE_ENV=production
# Expose port
EXPOSE 3000
# Start application
CMD ["node", "server.js"]
Docker Compose
# docker-compose.yml
version: '3'

services:
  brx-app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - BRX_API_KEY=${BRX_API_KEY}
    restart: always
Traditional Server Deployment
For applications with stable load patterns, traditional server deployment may be appropriate:
Express.js Server
// server.js
import express from 'express';
import BRX, { BRK } from 'brx-node';

const app = express();
const port = process.env.PORT || 3000;

// Initialize BRX client
const brx = new BRX(process.env.BRX_API_KEY);

// Middleware
app.use(express.json());

// Execute BRK endpoint
app.post('/execute', async (req, res) => {
  try {
    const { brkId, inputs } = req.body;

    // Fetch BRK schema
    const brkSchema = await brx.get(brkId);
    const myBrk = new BRK(brkSchema);

    // Set inputs
    for (const [key, value] of Object.entries(inputs)) {
      myBrk.input[key] = value;
    }

    // Execute BRK
    const result = await brx.run(myBrk);

    // Return response
    res.json({
      success: true,
      result: result[0].brxRes.output
    });
  } catch (error) {
    res.status(500).json({
      success: false,
      error: error.message
    });
  }
});

// Start server
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
PM2 Process Management
// ecosystem.config.js
module.exports = {
  apps: [{
    name: "brx-app",
    script: "server.js",
    instances: "max",
    exec_mode: "cluster",
    env: {
      NODE_ENV: "production",
      BRX_API_KEY: process.env.BRX_API_KEY
    }
  }]
};
Continuous Integration and Deployment (CI/CD)
Implementing CI/CD for BRX applications streamlines the deployment process:
GitHub Actions Example
# .github/workflows/deploy.yml
name: Deploy BRX Application

on:
  push:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: npm test
        env:
          BRX_API_KEY: ${{ secrets.BRX_API_KEY_TEST }}

  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy to AWS
        uses: serverless/github-action@v3
        with:
          args: deploy --stage prod
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          BRX_API_KEY: ${{ secrets.BRX_API_KEY_PROD }}
Deployment Stages
A typical CI/CD pipeline for BRX applications includes:
- Build: Compile code, bundle assets
- Test: Run unit tests, integration tests
- Deploy to Staging: Deploy to a staging environment (see the workflow sketch after this list)
- Acceptance Testing: Run automated acceptance tests
- Deploy to Production: Deploy to the production environment
- Monitoring: Monitor the application for issues
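To illustrate the staging and acceptance-testing stages, the GitHub Actions workflow above could be extended with jobs along these lines. The job names, the staging stage name, and the test:acceptance script are assumptions for this sketch, not part of BRX:
# Sketch: additional jobs for .github/workflows/deploy.yml
jobs:
  deploy-staging:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy to staging
        uses: serverless/github-action@v3
        with:
          args: deploy --stage staging
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          BRX_API_KEY: ${{ secrets.BRX_API_KEY_STAGING }}

  acceptance-test:
    needs: deploy-staging
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: npm run test:acceptance
        env:
          BRX_API_KEY: ${{ secrets.BRX_API_KEY_STAGING }}
The production deploy job from the earlier example would then declare needs: acceptance-test so it only runs after the staging checks pass.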
Scaling Strategies
Horizontal Scaling
To handle increased load, implement horizontal scaling:
// Load balancer configuration (example for Express.js with cluster)
import cluster from 'cluster';
import os from 'os';
import express from 'express';
import BRX from 'brx-node';

const numCPUs = os.cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  // Fork workers
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork(); // Replace the dead worker
  });
} else {
  // Workers can share any TCP connection
  const app = express();
  const port = process.env.PORT || 3000;

  // Initialize BRX client
  const brx = new BRX(process.env.BRX_API_KEY);

  // Express.js setup
  app.use(express.json());

  // API endpoints
  app.post('/execute', async (req, res) => {
    // BRK execution code
  });

  app.listen(port, () => {
    console.log(`Worker ${process.pid} started on port ${port}`);
  });
}
Caching
Implement caching to reduce API calls and improve performance:
import NodeCache from 'node-cache';
import BRX, { BRK } from 'brx-node';

// Initialize cache
const cache = new NodeCache({ stdTTL: 3600 }); // 1 hour TTL

// Initialize BRX client
const brx = new BRX(process.env.BRX_API_KEY);

// Execute BRK with caching
async function executeBrkWithCache(brkId, inputs) {
  // Create cache key
  const cacheKey = `${brkId}:${JSON.stringify(inputs)}`;

  // Check cache
  const cachedResult = cache.get(cacheKey);
  if (cachedResult) {
    return cachedResult;
  }

  // Fetch BRK schema
  const brkSchema = await brx.get(brkId);
  const myBrk = new BRK(brkSchema);

  // Set inputs
  for (const [key, value] of Object.entries(inputs)) {
    myBrk.input[key] = value;
  }

  // Execute BRK
  const result = await brx.run(myBrk);

  // Cache result
  cache.set(cacheKey, result);
  return result;
}
Rate Limiting
Implement rate limiting to prevent abuse and manage costs:
import rateLimit from 'express-rate-limit';
import express from 'express';

const app = express();

// Rate limiting middleware
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests from this IP, please try again after 15 minutes'
});

// Apply rate limiting to API endpoints
app.post('/execute', apiLimiter, async (req, res) => {
  // BRK execution code
});
Monitoring and Logging
Logging
Implement comprehensive logging for troubleshooting:
import winston from 'winston';
import BRX, { BRK } from 'brx-node';

// Configure logger
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'brx-app' },
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' })
  ]
});

// Add console transport in development
if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple()
  }));
}

// Initialize BRX client
const brx = new BRX(process.env.BRX_API_KEY);

// Execute BRK with logging
async function executeBrkWithLogging(brkId, inputs) {
  logger.info('Executing BRK', { brkId, inputs });
  try {
    // Fetch BRK schema
    const brkSchema = await brx.get(brkId);
    const myBrk = new BRK(brkSchema);

    // Set inputs
    for (const [key, value] of Object.entries(inputs)) {
      myBrk.input[key] = value;
    }

    // Execute BRK
    const startTime = Date.now();
    const result = await brx.run(myBrk);
    const executionTime = Date.now() - startTime;

    logger.info('BRK execution successful', {
      brkId,
      executionTime,
      resultCount: result.length
    });

    return result;
  } catch (error) {
    logger.error('BRK execution failed', {
      brkId,
      inputs,
      error: error.message,
      stack: error.stack
    });
    throw error;
  }
}
Monitoring
Implement monitoring to track application health and performance:
Prometheus Metrics (with Express.js)
import express from 'express';
import promClient from 'prom-client';
import BRX, { BRK } from 'brx-node';

// Initialize Express app
const app = express();
app.use(express.json());

// Initialize Prometheus client
const register = new promClient.Registry();
promClient.collectDefaultMetrics({ register });

// Custom metrics
const brkExecutionCounter = new promClient.Counter({
  name: 'brx_brk_executions_total',
  help: 'Total number of BRK executions',
  labelNames: ['brk_id', 'status']
});

const brkExecutionDuration = new promClient.Histogram({
  name: 'brx_brk_execution_duration_seconds',
  help: 'Duration of BRK executions in seconds',
  labelNames: ['brk_id']
});

register.registerMetric(brkExecutionCounter);
register.registerMetric(brkExecutionDuration);

// Initialize BRX client
const brx = new BRX(process.env.BRX_API_KEY);

// Metrics endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});

// Execute BRK endpoint with metrics
app.post('/execute', async (req, res) => {
  const { brkId, inputs } = req.body;
  const end = brkExecutionDuration.startTimer({ brk_id: brkId });
  try {
    // Fetch BRK schema
    const brkSchema = await brx.get(brkId);
    const myBrk = new BRK(brkSchema);

    // Set inputs
    for (const [key, value] of Object.entries(inputs)) {
      myBrk.input[key] = value;
    }

    // Execute BRK
    const result = await brx.run(myBrk);

    // Record metrics
    brkExecutionCounter.inc({ brk_id: brkId, status: 'success' });
    end();

    // Return response
    res.json({
      success: true,
      result: result[0].brxRes.output
    });
  } catch (error) {
    // Record metrics
    brkExecutionCounter.inc({ brk_id: brkId, status: 'error' });
    end();

    res.status(500).json({
      success: false,
      error: error.message
    });
  }
});
Health Checks
Implement health checks to monitor application status:
import express from 'express';
import BRX from 'brx-node';

const app = express();
const brx = new BRX(process.env.BRX_API_KEY);

// Basic health check
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

// Detailed health check
app.get('/health/detailed', async (req, res) => {
  try {
    // Check BRX API connectivity
    const testBrkSchema = await brx.get('test-brk-id');
    res.status(200).json({
      status: 'ok',
      details: {
        brxApi: 'connected',
        uptime: process.uptime(),
        memory: process.memoryUsage()
      }
    });
  } catch (error) {
    res.status(503).json({
      status: 'error',
      details: {
        brxApi: 'disconnected',
        error: error.message
      }
    });
  }
});
Security Best Practices
Input Validation
Validate all inputs to prevent injection attacks:
import Joi from 'joi';
import express from 'express';

const app = express();
app.use(express.json());

// Input validation middleware
const validateExecuteRequest = (req, res, next) => {
  const schema = Joi.object({
    brkId: Joi.string().required(),
    inputs: Joi.object().required()
  });

  const { error } = schema.validate(req.body);
  if (error) {
    return res.status(400).json({
      success: false,
      error: error.details[0].message
    });
  }
  next();
};

// Apply validation to endpoint
app.post('/execute', validateExecuteRequest, async (req, res) => {
  // BRK execution code
});
Authentication and Authorization
Implement proper authentication and authorization:
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());

// Authentication middleware
const authenticate = (req, res, next) => {
  const authHeader = req.headers.authorization;
  if (!authHeader || !authHeader.startsWith('Bearer ')) {
    return res.status(401).json({
      success: false,
      error: 'Authentication required'
    });
  }

  const token = authHeader.split(' ')[1];
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (error) {
    return res.status(401).json({
      success: false,
      error: 'Invalid token'
    });
  }
};

// Authorization middleware
const authorize = (requiredRole) => (req, res, next) => {
  if (!req.user || req.user.role !== requiredRole) {
    return res.status(403).json({
      success: false,
      error: 'Insufficient permissions'
    });
  }
  next();
};

// Apply authentication and authorization to endpoint
app.post('/execute', authenticate, authorize('admin'), async (req, res) => {
  // BRK execution code
});
HTTPS
Always use HTTPS in production:
import express from 'express';
import https from 'https';
import fs from 'fs';

const app = express();

// Redirect HTTP to HTTPS
app.use((req, res, next) => {
  if (!req.secure && process.env.NODE_ENV === 'production') {
    return res.redirect(`https://${req.headers.host}${req.url}`);
  }
  next();
});

// HTTPS server options
const httpsOptions = {
  key: fs.readFileSync('path/to/private/key.pem'),
  cert: fs.readFileSync('path/to/certificate.pem')
};

// Create HTTPS server
https.createServer(httpsOptions, app).listen(443, () => {
  console.log('HTTPS server running on port 443');
});

// Also start HTTP server for redirect
app.listen(80, () => {
  console.log('HTTP server running on port 80');
});
Deployment Checklist
Before deploying your BRX application to production, ensure you’ve addressed the following:
Pre-Deployment
- Production BRX API keys are stored in environment variables or a secret manager, not in code
- Separate API keys and configuration exist for development, staging, and production
- Input validation, authentication, and rate limiting are in place on public endpoints
- Tests pass in CI against a non-production BRX API key
- HTTPS is configured for all production traffic
Post-Deployment
- Health check endpoints respond correctly
- Logs and metrics are being collected, and alerts are configured
- BRX API usage and costs are monitored against expected request volume
- A rollback plan and an API key rotation schedule are documented
Conclusion
Deploying BRX applications to production requires careful planning and attention to detail. By following the best practices and strategies outlined in this guide, you can ensure your BRX applications are reliable, secure, and performant.
Remember that deployment is not a one-time event but an ongoing process. Continuously monitor your application, gather feedback, and make improvements to provide the best possible experience for your users.